hysop.core.mpi package¶
Hysop interface to the MPI implementation.
It contains:
basic MPI variables (main communicator, rank, size, …)
hysop.topology.topology.CartesianTopology: MPI process distribution + local mesh
This package hides the underlying MPI interface so that changing it, if ever required, is easier.
At this time we use mpi4py: http://mpi4py.scipy.org
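For orientation, here is a minimal usage sketch; it assumes only the module-level names documented below (main_comm, main_rank, main_size) and is not taken from hysop's own examples:

    # Query the MPI context exposed by hysop.core.mpi (illustrative sketch).
    from hysop.core.mpi import main_comm, main_rank, main_size

    if main_rank == 0:
        print('running on {} MPI process(es)'.format(main_size))

    # main_comm is a regular mpi4py intracommunicator, so the usual
    # mpi4py API applies directly (here a simple barrier).
    main_comm.Barrier()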
- hysop.core.mpi.Wtime() → float¶
Return the elapsed wall-clock time since some arbitrary point in the past, on the current MPI process.
Usage:
    tref = Wtime()
    # proceed with some computations …
    elapsed = Wtime() - tref
    # -> elapsed == time spent in 'some computations' on the current MPI process
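A runnable version of that usage, assuming the Wtime and main_comm names documented in this module (the final reduction is purely illustrative):

    # Time a local computation, then take the worst case over all ranks.
    from mpi4py import MPI
    from hysop.core.mpi import Wtime, main_comm, main_rank

    tref = Wtime()
    local_sum = sum(i * i for i in range(1000000))   # some local work
    elapsed = Wtime() - tref

    # Max of the per-process elapsed times over the main communicator.
    max_elapsed = main_comm.allreduce(elapsed, op=MPI.MAX)
    if main_rank == 0:
        print('slowest process took {:.6f} s'.format(max_elapsed))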
- hysop.core.mpi.host_comm = <mpi4py.MPI.Intracomm object>¶
Intrahost communicator
- hysop.core.mpi.host_rank = 0¶
Intrahost rank
- hysop.core.mpi.host_size = 1¶
Intrahost size
- hysop.core.mpi.interhost_comm = <mpi4py.MPI.Intracomm object>¶
Interhost communicator (between each host's local master rank)
- hysop.core.mpi.interhost_rank = 0¶
Communicator rank between hosts
- hysop.core.mpi.interhost_size = 1¶
Communicator size between hosts
- hysop.core.mpi.intershm_comm = <mpi4py.MPI.Intracomm object>¶
Communicator between shared memory local master ranks
- hysop.core.mpi.intershm_rank = 0¶
Communicator rank between shm masters
- hysop.core.mpi.intershm_size = 1¶
Communicator size between shm masters
- hysop.core.mpi.is_multihost = False¶
True if the program runs on more than one host
- hysop.core.mpi.is_multishm = False¶
True if the program spans more than one shared memory communicator
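The intrahost/interhost split documented above can be reproduced with plain mpi4py; the following is a minimal sketch of that kind of construction, not necessarily how hysop actually builds its communicators:

    from mpi4py import MPI

    world = MPI.COMM_WORLD

    # Processes that share a node (shared-memory domain) end up in the
    # same intrahost communicator.
    host_comm = world.Split_type(MPI.COMM_TYPE_SHARED)

    # Keep one master rank per host; only those masters join the
    # interhost communicator, the others get MPI.COMM_NULL.
    color = 0 if host_comm.Get_rank() == 0 else MPI.UNDEFINED
    interhost_comm = world.Split(color, key=world.Get_rank())

    # More than one host <=> the world is larger than a single host.
    is_multihost = world.Get_size() != host_comm.Get_size()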
- hysop.core.mpi.main_comm = <mpi4py.MPI.Intracomm object>¶
Main communicator
- hysop.core.mpi.main_rank = 0¶
Rank of the current process in the main communicator
- hysop.core.mpi.main_size = 1¶
Number of MPI processes in the main communicator
- hysop.core.mpi.processor_hash = 1138720636¶
MPI processor name hashed to an integer (fits into a 32-bit signed integer)
- hysop.core.mpi.processor_name = 'runner-cqxjjwvr-project-13672-concurrent-0'¶
MPI processor name
- hysop.core.mpi.shm_comm = <mpi4py.MPI.Intracomm object>¶
Shared memory communicator
- hysop.core.mpi.shm_rank = 0¶
Shared memory process id in shm_comm (i.e. NUMA node id)
- hysop.core.mpi.shm_size = 1¶
Shared memory process count in shm_comm (i.e. NUMA node count)
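As a usage illustration of the host-level communicators above, a sketch built only on the documented names plus standard mpi4py calls (the broadcast data is a placeholder):

    import socket
    from hysop.core.mpi import host_comm, host_rank, interhost_size

    data = None
    if host_rank == 0:
        # Executed exactly once per host (there are interhost_size hosts).
        data = {'hostname': socket.gethostname(), 'nhosts': interhost_size}

    # Share the host-master's data with every process on the same host.
    data = host_comm.bcast(data, root=0)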
Submodules¶
- hysop.core.mpi.bridge module
- hysop.core.mpi.redistribute module
- hysop.core.mpi.topo_tools module
TopoTools
TopoTools.compare_comm()
TopoTools.compare_groups()
TopoTools.convert_ranks()
TopoTools.create_subarray()
TopoTools.create_subarray_from_buffer()
TopoTools.gather_global_indices()
TopoTools.gather_global_indices_overlap()
TopoTools.initialize_tag_parameters()
TopoTools.intersection_size()
TopoTools.is_parent()
TopoTools.set_group_size()